astronomical society
Moving object detection from multi-depth images with an attention-enhanced CNN
Shibukawa, Masato, Yoshida, Fumi, Yanagisawa, Toshifumi, Ito, Takashi, Kurosaki, Hirohisa, Yoshikawa, Makoto, Kamiya, Kohki, Jiang, Ji-an, Fraser, Wesley, Kavelaars, JJ, Benecchi, Susan, Verbiscer, Anne, Hatakeyama, Akira, O, Hosei, Ozaki, Naoya
One of the greatest challenges in detecting solar system moving objects from wide-field survey data is determining whether a signal indicates a true object or arises from some other source, such as noise. Object verification has relied heavily on human inspection, which incurs significant labor costs. To address this limitation and reduce the reliance on manual intervention, we propose a multi-input convolutional neural network integrated with a convolutional block attention module (CBAM). This method is specifically tailored to enhance the moving object detection system that we have developed and used previously, and it introduces two innovations. The first is a multi-input architecture that processes multiple stacked images simultaneously. The second is the incorporation of the convolutional block attention module, which enables the model to focus on essential features in both the spatial and channel dimensions. These advancements facilitate efficient learning from multiple inputs, leading to more robust detection of moving objects. The performance of the model is evaluated on a dataset consisting of approximately 2,000 observational images. We achieved an accuracy of nearly 99% with an AUC (area under the curve) above 0.99. These metrics indicate that the proposed model achieves excellent classification performance. By adjusting the threshold for object detection, the new model reduces the human workload by more than 99% compared to manual verification.
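The CBAM-style attention described above can be illustrated with a minimal NumPy sketch. All shapes, weights, and the simplified spatial gate below are illustrative, not the paper's configuration; full CBAM additionally applies a 7x7 convolution to the pooled spatial maps, which is omitted here for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel attention: pool over space, pass avg/max vectors through a shared 2-layer MLP."""
    # x: (C, H, W); w1: (C//r, C); w2: (C, C//r), r = reduction ratio
    avg = x.mean(axis=(1, 2))                       # (C,)
    mx = x.max(axis=(1, 2))                         # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # shared MLP with ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))             # per-channel gate in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x):
    """Spatial attention from channel-wise mean and max maps (7x7 conv omitted)."""
    avg = x.mean(axis=0)                            # (H, W)
    mx = x.max(axis=0)                              # (H, W)
    scale = sigmoid(avg + mx)                       # per-pixel gate in (0, 1)
    return x * scale[None, :, :]

rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 2
x = rng.normal(size=(C, H, W))                      # one stacked-image feature map
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
out = spatial_attention(channel_attention(x, w1, w2))
print(out.shape)  # (8, 16, 16)
```

Because both gates lie in (0, 1), the module can only re-weight features, never amplify them, which is what lets the network emphasize faint moving-object signatures over background.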
From Simulations to Surveys: Domain Adaptation for Galaxy Observations
Brauer, Kaley, Dash, Aditya Prasad, Vyas, Meet J., Salim, Ahmed, Massala, Stiven Briand
Large photometric surveys will image billions of galaxies, but we currently lack quick, reliable automated ways to infer their physical properties like morphology, stellar mass, and star formation rates. Simulations provide galaxy images with ground-truth physical labels, but domain shifts in PSF, noise, backgrounds, selection, and label priors degrade transfer to real surveys. We present a preliminary domain adaptation pipeline that trains on simulated TNG50 galaxies and evaluates on real SDSS galaxies with morphology labels (elliptical/spiral/irregular). We train three backbones (CNN, $E(2)$-steerable CNN, ResNet-18) with focal loss and effective-number class weighting, and a feature-level domain loss $L_D$ built from GeomLoss (entropic Sinkhorn OT, energy distance, Gaussian MMD, and related metrics). We show that a combination of these losses with an OT-based "top-$k$ soft matching" loss that focuses $L_D$ on the worst-matched source-target pairs can further enhance domain alignment. With Euclidean distance, scheduled alignment weights, and top-$k$ matching, target accuracy (macro F1) rises from $\sim$46% ($\sim$30%) at no adaptation to $\sim$87% ($\sim$62.6%), with a domain AUC near 0.5, indicating strong latent-space mixing.
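The Gaussian MMD ingredient of such a feature-level domain loss can be sketched in a few lines of NumPy. The feature dimension, sample sizes, and kernel bandwidth below are illustrative, not the paper's settings, and this biased V-statistic stands in for the GeomLoss implementation.

```python
import numpy as np

def gaussian_mmd2(X, Y, sigma=1.0):
    """Squared MMD with a Gaussian kernel between feature sets X (n, d) and Y (m, d)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-d2 / (2.0 * sigma ** 2))
    # Biased V-statistic: distance between kernel mean embeddings, always >= 0
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(128, 4))   # "simulation" latent features
tgt = rng.normal(0.5, 1.0, size=(128, 4))   # shifted "survey" latent features
print(gaussian_mmd2(src, src))              # exactly 0 for identical sets
print(gaussian_mmd2(src, tgt))              # positive for domain-shifted sets
```

Minimizing such a term over batches of source and target features pushes the two latent distributions together, which is what the domain AUC near 0.5 quoted above diagnoses.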
Simulation-Based Pretraining and Domain Adaptation for Astronomical Time Series with Minimal Labeled Data
Gupta, Rithwik, Muthukrishna, Daniel, Audenaert, Jeroen
Astronomical time-series analysis faces a critical limitation: the scarcity of labeled observational data. We present a pre-training approach that leverages simulations, significantly reducing the need for labeled examples from real observations. Our models, trained on simulated data from multiple astronomical surveys (ZTF and LSST), learn generalizable representations that transfer effectively to downstream tasks. Using classifier-based architectures enhanced with contrastive and adversarial objectives, we create domain-agnostic models that demonstrate substantial performance improvements over baseline methods in classification, redshift estimation, and anomaly detection when fine-tuned with minimal real data. Remarkably, our models exhibit effective zero-shot transfer capabilities, achieving comparable performance on future telescope (LSST) simulations when trained solely on existing telescope (ZTF) data. Furthermore, they generalize to very different astronomical phenomena (namely variable stars from NASA's \textit{Kepler} telescope) despite being trained on transient events, demonstrating cross-domain capabilities. Our approach provides a practical solution for building general models when labeled data is scarce, but domain knowledge can be encoded in simulations.
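The contrastive ingredient of this kind of pre-training can be sketched with an NT-Xent-style loss over paired embeddings, where the two views of the same object are positives and everything else in the batch is a negative. The dimensions and temperature below are illustrative, not the paper's configuration.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss: z1[i] and z2[i] are positive pairs; all other rows are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                          # (n, n) scaled cosine sims
    # Row-wise cross-entropy with the diagonal (matched pairs) as targets
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

rng = np.random.default_rng(4)
z = rng.normal(size=(16, 8))
loss_aligned = nt_xent(z, z + 0.01 * rng.normal(size=(16, 8)))  # near-identical views
loss_random = nt_xent(z, rng.normal(size=(16, 8)))              # unrelated views
print(loss_aligned < loss_random)  # aligned views incur lower loss
```

Embeddings of two augmentations (or two surveys' renderings) of the same event are pulled together, which is one route to the domain-agnostic representations the abstract describes.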
deep-REMAP: Probabilistic Parameterization of Stellar Spectra Using Regularized Multi-Task Learning
In the era of exploding survey volumes, traditional methods of spectroscopic analysis are being pushed to their limits. In response, we develop deep-REMAP, a novel deep learning framework that utilizes a regularized, multi-task approach to predict stellar atmospheric parameters from observed spectra. We train a deep convolutional neural network on the PHOENIX synthetic spectral library and use transfer learning to fine-tune the model on a small subset of observed FGK dwarf spectra from the MARVELS survey. We then apply the model to 732 uncharacterized FGK giant candidates from the same survey. When validated on 30 MARVELS calibration stars, deep-REMAP accurately recovers the effective temperature ($T_{\rm{eff}}$), surface gravity ($\log \rm{g}$), and metallicity ([Fe/H]), achieving a precision of approximately 75 K in $T_{\rm{eff}}$, for instance. By combining an asymmetric loss function with an embedding loss, our regression-as-classification framework is interpretable, robust to parameter imbalances, and capable of capturing non-Gaussian uncertainties. While developed for MARVELS, the deep-REMAP framework is extensible to other surveys and synthetic libraries, demonstrating a powerful and automated pathway for stellar characterization.
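The regression-as-classification idea can be sketched as follows: discretize a continuous parameter into bins, predict a distribution over those bins, and recover a point estimate from its expectation. The 100 K grid and Gaussian-smoothed soft labels below are illustrative assumptions, not the paper's actual binning or loss.

```python
import numpy as np

# Hypothetical T_eff grid: bin edges from 4000 K to 7000 K in 100 K steps
bins = np.arange(4000.0, 7000.0 + 1.0, 100.0)
centers = 0.5 * (bins[:-1] + bins[1:])

def soft_label(teff, tau=100.0):
    """Gaussian-smoothed target over bins, so near-miss bins receive partial credit."""
    w = np.exp(-0.5 * ((centers - teff) / tau) ** 2)
    return w / w.sum()

def point_estimate(probs):
    """Expectation over bin centers recovers a continuous parameter value."""
    return float((probs * centers).sum())

y = soft_label(5750.0)
print(round(point_estimate(y)))  # 5750
```

Treating the predicted histogram as a distribution rather than a single number is what allows non-Gaussian (e.g. skewed or multimodal) uncertainties to be captured.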
Galaxy image simplification using Generative AI
Erukude, Sai Teja, Shamir, Lior
Modern digital sky surveys have been acquiring images of billions of galaxies. While these images often provide sufficient detail to analyze the shapes of the galaxies, accurate analysis of such high volumes of images requires effective automation. Current solutions often rely on machine learning annotation of galaxy images based on a set of pre-defined classes. Here we introduce a new approach to galaxy image analysis that is based on generative AI. The method simplifies the galaxy images and automatically converts them into a ``skeletonized'' form. The simplified images allow accurate measurements of the galaxy shapes and analysis that is not limited to a certain pre-defined set of classes. We demonstrate the method by applying it to 125,000 galaxy images acquired by the DESI Legacy Survey; the code, the data, and the catalog of simplified images are publicly available.
The TESS Ten Thousand Catalog: 10,001 uniformly-vetted and -validated Eclipsing Binary Stars detected in Full-Frame Image data by machine learning and analyzed by citizen scientists
Kostov, Veselin B., Powell, Brian P., Fornear, Aline U., Di Fraia, Marco Z., Gagliano, Robert, Jacobs, Thomas L., de Lambilly, Julien S., Luca, Hugo A. Durantini, Majewski, Steven R., Omohundro, Mark, Orosz, Jerome, Rappaport, Saul A., Salik, Ryan, Short, Donald, Welsh, William, Alexandrov, Svetoslav, da Silva, Cledison Marcos, Dunning, Erika, Guhne, Gerd, Huten, Marc, Hyogo, Michiharu, Iannone, Davide, Lee, Sam, Magliano, Christian, Sharma, Manya, Tarr, Allan, Yablonsky, John, Acharya, Sovan, Adams, Fred, Barclay, Thomas, Montet, Benjamin T., Mullally, Susan, Olmschenk, Greg, Prsa, Andrej, Quintana, Elisa, Wilson, Robert, Balcioglu, Hasret, Kruse, Ethan, Collaboration, the Eclipsing Binary Patrol
The Transiting Exoplanet Survey Satellite (TESS) has surveyed nearly the entire sky in Full-Frame Image mode with a time resolution of 200 seconds to 30 minutes and a temporal baseline of at least 27 days. In addition to the primary goal of discovering new exoplanets, TESS is exceptionally capable at detecting variable stars, and in particular short-period eclipsing binaries, which are relatively common, making up a few percent of all stars, and represent powerful astrophysical laboratories for deep investigations of stellar formation and evolution. We combed Sectors 1-82 of TESS Full-Frame Image data searching for eclipsing binary stars using a neural network that identified ~1.2 million stars with eclipse-like features. Of these, we have performed an in-depth analysis on ~60,000 targets using automated methods and manual inspection by citizen scientists. Here we present a catalog of 10,001 uniformly-vetted and -validated eclipsing binary stars that passed all our ephemeris and photocenter tests, as well as complementary visual inspection. Of these, 7936 are new eclipsing binaries while the remaining 2065 are known systems for which we update the published ephemerides. We outline the detection and analysis of the targets, discuss the properties of the sample, and highlight potentially interesting systems. Finally, we also provide a list of ~900,000 unvetted and unvalidated targets for which the neural network found eclipse-like features with a score higher than 0.9, and for which there are no known eclipsing binaries within a sky-projected separation of a TESS pixel (~21 arcsec).
DART-Vetter: A Deep LeARning Tool for automatic triage of exoplanet candidates
Fiscale, Stefano, Inno, Laura, Rotundi, Alessandra, Ciaramella, Angelo, Ferone, Alessio, Magliano, Christian, Cacciapuoti, Luca, Kostov, Veselin, Quintana, Elisa, Covone, Giovanni, Tomajoli, Maria Teresa Muscari, Saggese, Vito, Tonietti, Luca, Vanzanella, Antonio, Della Corte, Vincenzo
In the identification of new planetary candidates in transit surveys, Deep Learning models have proved essential for efficiently analysing a continuously growing volume of photometric observations. To further improve the robustness of these models, it is necessary to exploit the complementarity of data collected by different transit surveys such as NASA's Kepler, the Transiting Exoplanet Survey Satellite (TESS), and, in the near future, ESA's PLAnetary Transits and Oscillation of stars (PLATO) mission. In this work, we present a Deep Learning model, named DART-Vetter, able to distinguish planetary candidates (PC) from false positive signals (NPC) detected by any potential transit survey. DART-Vetter is a Convolutional Neural Network that processes only the light curves folded on the period of the corresponding signal, featuring a simpler and more compact architecture than other triaging and/or vetting models available in the literature. We trained and tested DART-Vetter on several datasets of publicly available and homogeneously labelled TESS and Kepler light curves in order to prove the effectiveness of our model. Despite its simplicity, DART-Vetter achieves highly competitive triaging performance, with a recall rate of 91% on an ensemble of TESS and Kepler data, when compared to ExoMiner and Astronet-Triage. Its compact, open-source, and easy-to-replicate architecture makes DART-Vetter a particularly useful tool for automating triaging procedures or assisting human vetters, showing reasonable generalization on TCEs with Multiple Event Statistic (MES) > 20 and orbital period < 50 days.
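Folding a light curve on a candidate period, the only preprocessing the abstract says the model's input requires, can be sketched in NumPy. The bin count and the synthetic box-shaped transit below are illustrative, not the paper's pipeline.

```python
import numpy as np

def phase_fold(times, flux, period, t0=0.0, n_bins=64):
    """Fold a light curve on a candidate period and bin it, as input to a vetting CNN."""
    phase = ((times - t0) / period) % 1.0
    idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    binned = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            binned[b] = flux[sel].mean()   # mean flux per phase bin
    return binned

# Synthetic box transit: 3-day period, 1% depth, ~0.1 duration in phase
t = np.linspace(0.0, 27.0, 5000)
phase = (t / 3.0) % 1.0
flux = np.where(np.abs(phase - 0.5) < 0.05, 0.99, 1.0)
folded = phase_fold(t, flux, 3.0)
print(folded[32])  # deepest bins sit near phase 0.5, at flux 0.99
```

Folding stacks every transit on top of each other, so a periodic dip that is invisible in any single epoch becomes an obvious feature for the classifier.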
Savage-Dickey density ratio estimation with normalizing flows for Bayesian model comparison
Lin, Kiyam, Polanska, Alicja, Piras, Davide, Mancini, Alessio Spurio, McEwen, Jason D.
A core motivation of science is to evaluate which scientific model best explains observed data. Bayesian model comparison provides a principled statistical approach to comparing scientific models and has found widespread application within cosmology and astrophysics. Calculating the Bayesian evidence is computationally challenging, especially as we continue to explore increasingly complex models. The Savage-Dickey density ratio (SDDR) provides a method to calculate the Bayes factor (evidence ratio) between two nested models using only posterior samples from the super model. The SDDR requires the calculation of a normalised marginal distribution over the extra parameters of the super model, which has typically been performed using classical density estimators, such as histograms. Classical density estimators, however, can struggle to scale to high-dimensional settings. We introduce a neural SDDR approach using normalizing flows that can scale to settings where the super model contains a large number of extra parameters. We demonstrate the effectiveness of this neural SDDR methodology applied to both toy and realistic cosmological examples. For a field-level inference setting, we show that Bayes factors computed for a Bayesian hierarchical model (BHM) and simulation-based inference (SBI) approach are consistent, providing further validation that SBI extracts as much cosmological information from the field as the BHM approach. The SDDR estimator with normalizing flows is implemented in the open-source harmonic Python package.
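The SDDR reduces the Bayes factor between nested models to a ratio of densities at the nested parameter value: the super model's marginal posterior over the extra parameter, divided by its prior, both evaluated where the nested model fixes that parameter. A toy NumPy sketch with Gaussian prior and posterior, using a Gaussian fit to posterior samples as a simple stand-in for the paper's normalizing-flow density estimator:

```python
import numpy as np

def normal_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

# Nested setup: the super model has one extra parameter theta with prior N(0, 2^2);
# the nested model fixes theta = 0. Posterior samples are drawn from N(0.3, 0.5^2).
rng = np.random.default_rng(2)
samples = rng.normal(0.3, 0.5, size=200_000)

# SDDR: Bayes factor = posterior density at theta=0 / prior density at theta=0
mu_hat, sig_hat = samples.mean(), samples.std()
bf_estimate = normal_pdf(0.0, mu_hat, sig_hat) / normal_pdf(0.0, 0.0, 2.0)
bf_analytic = normal_pdf(0.0, 0.3, 0.5) / normal_pdf(0.0, 0.0, 2.0)
print(bf_estimate, bf_analytic)  # the two values agree closely
```

A normalizing flow plays the same role as the Gaussian fit here, but remains tractable when the extra parameters are many-dimensional, where histograms and simple parametric fits break down.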
Adaptive Detection of Fast Moving Celestial Objects Using a Mixture of Experts and Physical-Inspired Neural Network
Jia, Peng, Li, Ge, Cheng, Bafeng, Li, Yushan, Sun, Rongyu
Fast moving celestial objects are characterized by velocities across the celestial sphere that differ significantly from the motions of background stars. In observational images, these objects exhibit distinct shapes, contrasting with the typical appearances of stars. Depending on the observational method employed, these celestial entities may be designated as near-Earth objects or asteroids. Historically, fast moving celestial objects have been observed using ground-based telescopes, where the relative stability of stars and Earth facilitated effective image differencing techniques alongside traditional detection and classification algorithms. However, the growing prevalence of space-based telescopes, along with their diverse observational modes, produces images with different properties, rendering conventional methods less effective. This paper presents a novel algorithm for detecting fast moving celestial objects within star fields. Our approach enhances state-of-the-art detection neural networks by transforming them into physical-inspired neural networks. These networks leverage the point spread function of the telescope and the specific observational mode as prior information; they can directly identify fast moving celestial objects within star fields without requiring additional training, thereby addressing the limitations of traditional techniques. Additionally, all neural networks are integrated using the mixture of experts technique, forming a comprehensive fast moving celestial object detection algorithm. We have evaluated our algorithm on simulated observational data that mimics various observation scenarios of space-based telescopes, as well as on real observational images. Results demonstrate that our method effectively detects fast moving celestial objects across different observational modes.
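The mixture-of-experts combination can be sketched as a gated weighted average of per-expert detection maps. Gating with a plain softmax over fixed logits, as below, is an illustrative simplification; a real gating network would predict those logits from the input image and observational mode.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

def moe_detect(scores_per_expert, gate_logits):
    """Combine per-pixel detection maps from several experts with softmax gate weights."""
    w = softmax(gate_logits)                  # one weight per expert, summing to 1
    stacked = np.stack(scores_per_expert)     # (n_experts, H, W)
    return np.tensordot(w, stacked, axes=1)   # convex combination, shape (H, W)

rng = np.random.default_rng(3)
maps = [rng.random((8, 8)) for _ in range(3)]           # hypothetical expert outputs
combined = moe_detect(maps, np.array([2.0, 0.0, 0.0]))  # gate favors the first expert
print(combined.shape)  # (8, 8)
```

Because the output is a convex combination, the combined map never strays outside the range spanned by the individual experts; the gate simply decides which observational-mode specialist to trust most.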
S-R2D2: a spherical extension of the R2D2 deep neural network series paradigm for wide-field radio-interferometric imaging
Tajja, A., Aghabiglou, A., Tolley, E., Kneib, J-P., Thiran, J-P., Wiaux, Y.
Recently, the R2D2 paradigm, standing for "Residual-to-Residual DNN series for high-Dynamic-range imaging", was introduced for image formation in Radio Interferometry (RI) as a learned version of the traditional CLEAN algorithm. The first incarnations of R2D2 are limited to planar imaging on small fields of view, failing to meet the spherical-imaging requirement of modern telescopes observing wide fields. To address this limitation, we propose the spherical-imaging extension S-R2D2. First, like R2D2, S-R2D2 encapsulates its minor cycles in existing 2D-Euclidean deep neural network (DNN) architectures, but it adapts its iterative scheme to incorporate the wide-field measurement model mapping a spherical image to visibility data. We implemented this model as the composition of an efficient Fourier-based interpolator mapping the spherical image onto the equatorial plane, with the standard RI operator mapping the equatorial-plane image to visibility data. Importantly, the interpolation step must inevitably be performed at a lower-than-optimal resolution on the plane, to meet the high-resolution requirement on the sphere of wide-field imaging while preserving scalability. Therefore, second, we design S-R2D2's DNN training loss to jointly learn to correct the interpolation approximations and identify residual image structures on the sphere, ensuring consistency with the spherical ground truth via the adjoint plane-to-sphere interpolator. Finally, we demonstrate through simulations S-R2D2's capability to perform fast and accurate reconstructions of spherical monochromatic intensity images across high-resolution, high-dynamic-range settings.